299 Publications

An adaptive spectral method for oscillatory second-order linear ODEs with frequency-independent cost

F. Agocs, A. Barnett

We introduce an efficient numerical method for second-order linear ODEs whose solution may vary between highly oscillatory and slowly changing over the solution interval. In oscillatory regions the solution is generated via a nonoscillatory phase function that obeys the nonlinear Riccati equation. We propose a defect correction iteration that gives an asymptotic series for such a phase function; this is numerically approximated on a Chebyshev grid with a small number of nodes. For analytic coefficients we prove that each iteration, up to a certain maximum number, reduces the residual by a factor on the order of the local frequency. The algorithm adapts both the stepsize and the choice of method, switching to a conventional spectral collocation method away from oscillatory regions. In numerical experiments we find that our proposal outperforms other state-of-the-art oscillatory solvers, most significantly at low to intermediate frequencies and at low tolerances, where it may use up to \(10^6\) times fewer function evaluations. Even in high-frequency regimes, our implementation is on average 10 times faster than other specialized solvers.


Good rates from bad coordinates: the exponential average time-dependent rate approach

Nicodemo Mazzaferro, Subarna Sasmal, P. Cossio, Glen M. Hocky

Our ability to calculate rates of biochemical processes using molecular dynamics simulations is severely limited by the fact that the time scales for reactions, or changes in conformational state, scale exponentially with the relevant free-energy barriers. In this work, we improve upon a recently proposed rate estimator that allows us to predict transition times with molecular dynamics simulations biased to rapidly explore one or several collective variables. This approach relies on the idea that not all bias goes into promoting transitions, and along with the rate, it estimates a concomitant scale factor for the bias termed the collective variable biasing efficiency γ. First, we demonstrate mathematically that our new formulation allows us to derive the commonly used Infrequent Metadynamics (iMetaD) estimator when using a perfect collective variable, γ=1. After testing it on a model potential, we then study the unfolding behavior of a previously well characterized coarse-grained protein, which is sufficiently complex that we can choose many different collective variables to bias, but which is sufficiently simple that we are able to compute the unbiased rate directly. For this system, we demonstrate that our new Exponential Average Time-Dependent Rate (EATR) estimator converges to the true rate more rapidly as a function of bias deposition time than does the previous iMetaD approach, even for bias deposition times that are short. We also show that the γ parameter can serve as a good metric for assessing the quality of the biasing coordinate. Finally, we demonstrate that the approach works when combining multiple less-than-optimal bias coordinates.
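A minimal illustration of the rescaled-time rate estimate that EATR generalizes (its γ=1, perfect-coordinate limit, i.e. the iMetaD-style estimator): escapes occur with instantaneous rate k₀·exp(βV(t)) under a hypothetical linearly ramped bias V(t) = c·t, escape times are sampled exactly from the corresponding survival function, and the true rate is recovered from the bias-rescaled times. All parameters are invented for the toy.

```python
import numpy as np

rng = np.random.default_rng(0)
k0, beta_c = 1e-4, 0.05              # true unbiased rate; beta * ramp speed
n = 20000                            # number of independent biased "simulations"

u = rng.random(n)
# invert the survival function S(t) = exp(-k0*(exp(beta_c*t)-1)/beta_c)
t_esc = np.log(1.0 - beta_c * np.log(u) / k0) / beta_c
# rescaled ("unbiased") time accumulated up to each escape:
#   tau = integral_0^t exp(beta*V(t')) dt'
tau = (np.exp(beta_c * t_esc) - 1.0) / beta_c
k_hat = n / tau.sum()                # maximum-likelihood rate estimate
```

By construction the rescaled times are exponential with rate k₀, so the estimate converges to the true rate; the paper's EATR estimator additionally fits γ when the biased coordinate is imperfect.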

March 15, 2024

Active learning of Boltzmann samplers and potential energies with quantum mechanical accuracy

Ana Molina-Taborda, P. Cossio, et al.

Extracting consistent statistics between relevant free-energy minima of a molecular system is essential for physics, chemistry and biology. Molecular dynamics (MD) simulations can aid in this task but are computationally expensive, especially for systems that require quantum accuracy. To overcome this challenge, we develop an approach combining enhanced sampling with deep generative models and active learning of a machine learning potential (MLP). We introduce an adaptive Markov chain Monte Carlo framework that enables the training of one Normalizing Flow (NF) and one MLP per state, achieving rapid convergence towards the Boltzmann distribution. Leveraging the trained NF and MLP models, we compute thermodynamic observables such as free-energy differences or optical spectra. We apply this method to study the isomerization of an ultrasmall silver nanocluster, belonging to a set of systems with diverse applications in the fields of medicine and catalysis.
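The core move in such adaptive schemes is a Metropolis-Hastings update whose independence proposal is a trained generative model of the target Boltzmann distribution. A heavily simplified sketch, with the normalizing flow replaced by a fixed Gaussian proposal and a hypothetical 1D double-well potential standing in for the quantum-accurate energy surface:

```python
import numpy as np

rng = np.random.default_rng(4)

def U(x):
    # toy double-well "potential energy" (beta = 1), invented for illustration
    return (x**2 - 1.0)**2

def log_q(x):
    # log-density of the proposal "model": N(0, 2^2), up to a constant
    return -0.5 * x**2 / 4.0

x = 0.0
samples = []
for _ in range(50000):
    xp = 2.0 * rng.normal()          # independent draw from the proposal
    # independence MH acceptance ratio: pi(x')q(x) / (pi(x)q(x'))
    log_a = (-U(xp) + U(x)) + (log_q(x) - log_q(xp))
    if np.log(rng.random()) < log_a:
        x = xp
    samples.append(x)
samples = np.array(samples)
```

In the paper's framework the proposal is a normalizing flow retrained as sampling proceeds and the energy comes from an actively learned MLP; the acceptance rule above is what keeps the chain targeting the Boltzmann distribution throughout.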

2024

Nested R̂: Assessing the convergence of Markov chain Monte Carlo when running many short chains

C. Margossian, Matthew D. Hoffman, Pavel Sountsov, Lionel Riou-Durand, Aki Vehtari, Andrew Gelman

Recent developments in Markov chain Monte Carlo (MCMC) algorithms allow us to run thousands of chains in parallel almost as quickly as a single chain, using hardware accelerators such as GPUs. While each chain still needs to forget its initial point during a warmup phase, the subsequent sampling phase can be shorter than in classical settings, where we run only a few chains. To determine if the resulting short chains are reliable, we need to assess how close the Markov chains are to their stationary distribution after warmup. The potential scale reduction factor R̂ is a popular convergence diagnostic but unfortunately can require a long sampling phase to work well. We present a nested design to overcome this challenge and a generalization called nested R̂. This new diagnostic works under conditions similar to R̂ and completes the workflow for GPU-friendly samplers. In addition, the proposed nesting provides theoretical insights into the utility of R̂, in both the classical and short-chain regimes.
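A simplified sketch of a nested R̂-style diagnostic (see the paper for the exact estimator and its corrections): draws are arranged as (K superchains, M chains per superchain, N draws per chain), and between-superchain variance is compared against the variance within superchains, which pools chain-to-chain spread with within-chain variance.

```python
import numpy as np

def nested_rhat(x):
    # x has shape (K superchains, M chains per superchain, N draws)
    K, M, N = x.shape
    chain_mean = x.mean(axis=2)                    # (K, M)
    super_mean = chain_mean.mean(axis=1)           # (K,)
    grand_mean = super_mean.mean()
    # between-superchain variance of superchain means
    B = np.sum((super_mean - grand_mean) ** 2) / (K - 1)
    # within-superchain: chain-to-chain spread plus mean within-chain variance
    W_between = ((chain_mean - super_mean[:, None]) ** 2).mean()
    W_within = x.var(axis=2, ddof=1).mean()
    W = W_between + W_within
    return np.sqrt(1.0 + B / W)

rng = np.random.default_rng(1)
good = rng.normal(size=(8, 16, 50))                # all chains stationary
bad = good + rng.normal(size=(8, 1, 1)) * 2.0      # superchain-level shifts
```

With even short chains (N = 50 here), superchain-level disagreement inflates the diagnostic well above 1, while stationary chains keep it near 1.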


Uniform approximation of common Gaussian process kernels using equispaced Fourier grids

A. Barnett, Philip Greengard, M. Rachh

The high efficiency of a recently proposed method for computing with Gaussian processes relies on expanding a (translationally invariant) covariance kernel into complex exponentials, with frequencies lying on a Cartesian equispaced grid. Here we provide rigorous error bounds for this approximation for two popular kernels—Matérn and squared exponential—in terms of the grid spacing and size. The kernel error bounds are uniform over a hypercube centered at the origin. Our tools include a split into aliasing and truncation errors, and bounds on sums of Gaussians or modified Bessel functions over various lattices. For the Matérn case, motivated by numerical study, we conjecture a stronger Frobenius-norm bound on the covariance matrix error for randomly-distributed data points. Lastly, we prove bounds on, and study numerically, the ill-conditioning of the linear systems arising in such regression problems.
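A 1D sketch of the expansion in question for the squared-exponential kernel k(r) = exp(-r²/(2ℓ²)): truncate the inverse Fourier transform to an equispaced frequency grid f_j = jh, |j| ≤ m, with trapezoid weights h·k̂(f_j). The paper proves uniform error bounds in terms of h and m; the parameters below are illustrative only.

```python
import numpy as np

l = 0.1                                   # kernel length scale
h, m = 0.5, 40                            # frequency grid spacing and half-size
j = np.arange(-m, m + 1)
f = j * h                                 # equispaced Cartesian frequency grid
# Fourier transform of the squared-exponential kernel (convention e^{-2*pi*i*f*r})
khat = np.sqrt(2 * np.pi) * l * np.exp(-2 * np.pi**2 * l**2 * f**2)
w = h * khat                              # trapezoid quadrature weights

r = np.linspace(-1, 1, 201)               # hypercube (here: interval) of interest
k_true = np.exp(-r**2 / (2 * l**2))
k_approx = (w[None, :] * np.exp(2j * np.pi * r[:, None] * f[None, :])).sum(1).real
err = np.max(np.abs(k_approx - k_true))
```

By Poisson summation the error splits exactly into the aliasing (periodic images at spacing 1/h) and truncation (Fourier tail beyond mh) contributions that the paper bounds; with these parameters both are far below machine precision.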


Decomposing imaginary time Feynman diagrams using separable basis functions: Anderson impurity model strong coupling expansion

J. Kaye, H. Strand, D. Golez

We present a deterministic algorithm for the efficient evaluation of imaginary time diagrams based on the recently introduced discrete Lehmann representation (DLR) of imaginary time Green's functions. In addition to the efficient discretization of diagrammatic integrals afforded by its approximation properties, the DLR basis is separable in imaginary time, allowing us to decompose diagrams into linear combinations of nested sequences of one-dimensional products and convolutions. Focusing on the strong coupling bold-line expansion of generalized Anderson impurity models, we show that our strategy reduces the computational complexity of evaluating an $M$th-order diagram at inverse temperature $\beta$ and spectral width $\omega_{\max}$ from $\mathcal{O}((\beta \omega_{\max})^{2M-1})$ for a direct quadrature to $\mathcal{O}(M (\log (\beta \omega_{\max}))^{M+1})$, with controllable high-order accuracy. We benchmark our algorithm using third-order expansions for multi-band impurity problems with off-diagonal hybridization and spin-orbit coupling, presenting comparisons with exact diagonalization and quantum Monte Carlo approaches. In particular, we perform a self-consistent dynamical mean-field theory calculation for a three-band Hubbard model with strong spin-orbit coupling representing a minimal model of Ca$_2$RuO$_4$, demonstrating the promise of the method for modeling realistic strongly correlated multi-band materials. For both strong and weak coupling expansions of low and intermediate order, in which diagrams can be enumerated, our method provides an efficient, straightforward, and robust black-box evaluation procedure. In this sense, it fills a gap between diagrammatic approximations of the lowest order, which are simple and inexpensive but inaccurate, and those based on Monte Carlo sampling of high-order diagrams.
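The property the DLR exploits can be seen numerically: the imaginary-time kernel K(τ,ω) = e^(-τω)/(1 + e^(-βω)) is numerically low rank, with rank growing only logarithmically in the dimensionless cutoff βω_max. The sketch below measures this on plain uniform grids with an illustrative tolerance; the paper's construction instead selects a stable set of representative frequencies from this kernel.

```python
import numpy as np

def kernel(beta, wmax, n=400):
    tau = np.linspace(0, beta, n)
    w = np.linspace(-wmax, wmax, n)
    T, W = np.meshgrid(tau, w, indexing="ij")
    # overflow-safe evaluation of exp(-tau*w)/(1 + exp(-beta*w)) for both signs of w
    shift = beta * (W < 0)
    return np.exp(-(T - shift) * W) / (1 + np.exp(-beta * np.abs(W)))

def numerical_rank(M, tol=1e-8):
    s = np.linalg.svd(M, compute_uv=False)
    return int((s > tol * s[0]).sum())

r_small = numerical_rank(kernel(1.0, 100.0))     # beta*omega_max = 100
r_large = numerical_rank(kernel(1.0, 1000.0))    # 10x larger cutoff
```

Despite the tenfold increase in βω_max, the numerical rank barely moves, which is what makes the separable, compact DLR basis (and hence the nested 1D products and convolutions above) possible.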


Explainable Equivariant Neural Networks for Particle Physics: PELICAN

A. Bogatskii, Timothy Hoffman, David W. Miller, Jan T. Offermann, Xiaoyang Liu

PELICAN is a novel permutation equivariant and Lorentz invariant or covariant aggregator network designed to overcome common limitations found in architectures applied to particle physics problems. Compared to many approaches that use non-specialized architectures that neglect underlying physics principles and require very large numbers of parameters, PELICAN employs a fundamentally symmetry group-based architecture that demonstrates benefits in terms of reduced complexity, increased interpretability, and raw performance. We present a comprehensive study of the PELICAN algorithm architecture in the context of both tagging (classification) and reconstructing (regression) Lorentz-boosted top quarks, including the difficult task of specifically identifying and measuring the $W$-boson inside the dense environment of the Lorentz-boosted top-quark hadronic final state. We also extend the application of PELICAN to the tasks of identifying quark-initiated vs. gluon-initiated jets, and a multi-class identification across five separate target categories of jets. When tested on the standard task of Lorentz-boosted top-quark tagging, PELICAN outperforms existing competitors with much lower model complexity and high sample efficiency. On the less common and more complex task of 4-momentum regression, PELICAN also outperforms hand-crafted, non-machine learning algorithms. We discuss the implications of symmetry-restricted architectures for the wider field of machine learning for physics.
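A toy sketch of the symmetry-respecting inputs such an architecture can build on: the matrix of pairwise Minkowski inner products d_ij = p_i · p_j is Lorentz invariant, and permutation-equivariant aggregations over its indices give a network that respects particle-relabeling symmetry. The momenta below are randomly generated on-shell four-vectors, not data from the paper.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])     # Minkowski metric, signature (+,-,-,-)

def gram(p):
    # matrix of pairwise Minkowski inner products; p has shape (n, 4)
    return p @ eta @ p.T

def boost_x(rapidity):
    # Lorentz boost along the x-axis
    ch, sh = np.cosh(rapidity), np.sinh(rapidity)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = ch
    L[0, 1] = L[1, 0] = sh
    return L

rng = np.random.default_rng(2)
p3 = rng.normal(size=(5, 3))               # random spatial momenta
E = np.sqrt((p3**2).sum(1) + 1.0)          # energies for unit-mass particles
p = np.column_stack([E, p3])               # five on-shell four-momenta

G = gram(p)
G_boosted = gram(p @ boost_x(0.7).T)       # same invariants after a boost
perm = rng.permutation(5)                  # relabeling the particles
```

Boosting leaves the Gram matrix unchanged, while relabeling particles permutes its rows and columns consistently, which is exactly the equivariance structure the network's aggregators are built to preserve.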


Sharp error estimates for target measure diffusion maps with applications to the committor problem

Shashank Sule, L. Evans, Maria Cameron

We obtain asymptotically sharp error estimates for the consistency error of the Target Measure Diffusion map (TMDmap) (Banisch et al. 2020), a variant of diffusion maps featuring importance sampling and hence allowing input data drawn from an arbitrary density. The derived error estimates include the bias error and the variance error. The resulting convergence rates are consistent with the approximation theory of graph Laplacians. The key novelty of our results lies in the explicit quantification of all the prefactors on leading-order terms. We also prove an error estimate for solutions of Dirichlet BVPs obtained using TMDmap, showing that the solution error is controlled by the consistency error. We use these results to study an important application of TMDmap in the analysis of rare events in systems governed by overdamped Langevin dynamics using the framework of transition path theory (TPT). The cornerstone ingredient of TPT is the solution of the committor problem, a boundary value problem for the backward Kolmogorov PDE. Remarkably, we find that the TMDmap algorithm is particularly suited as a meshless solver for the committor problem due to the cancellation of several error terms in the prefactor formula. Furthermore, significant improvements in bias and variance errors occur when using a quasi-uniform sampling density. Our numerical experiments show that these improvements in accuracy are realizable in practice when using $\delta$-nets as spatially uniform inputs to the TMDmap algorithm.
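A 1D sketch of the TMDmap construction on a spatially uniform (δ-net-like) point cloud, targeting π(x) ∝ exp(-x²/2); the normalization follows one common form of the algorithm and all parameters are illustrative. The resulting discrete generator L should act approximately like Lf = f'' + (log π)'·f' on smooth functions, which is the operator underlying the committor BVP.

```python
import numpy as np

eps = 0.01                                # kernel bandwidth parameter
x = np.linspace(-2, 2, 401)               # uniform "delta-net" input
pi = np.exp(-x**2 / 2)                    # unnormalized target density

K = np.exp(-(x[:, None] - x[None, :])**2 / (4 * eps))
q = K.sum(axis=1)                         # kernel density estimate of the inputs
K_tilde = K * (np.sqrt(pi) / q)[None, :]  # target-measure right-normalization
P = K_tilde / K_tilde.sum(axis=1, keepdims=True)
L = (P - np.eye(len(x))) / eps            # discrete generator

f = x                                     # test function f(x) = x
Lf = L @ f                                # should behave like -x (up to scale)
interior = np.abs(x) < 1.0                # keep away from boundary artifacts
```

For the standard Gaussian target, f'' + (log π)'·f' = -x for f(x) = x, and the discrete generator reproduces this up to a positive scale in the interior; constants are annihilated exactly since P is row-stochastic.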


Microscopic Theory, Analysis, and Interpretation of Conductance Histograms in Molecular Junctions

Leopoldo Mejía, P. Cossio, Ignacio Franco

Molecular electronics break-junction experiments are widely used to investigate fundamental physics and chemistry at the nanoscale. Reproducibility in these experiments relies on measuring conductance on thousands of freshly formed molecular junctions, yielding a broad histogram of conductance events. Experiments typically focus on the most probable conductance, while the information content of the conductance histogram has remained unclear. Here we develop a microscopic theory for the conductance histogram by merging the theory of force spectroscopy with molecular conductance. The procedure yields analytical equations that accurately fit the conductance histogram of a wide range of molecular junctions and augments the information content that can be extracted from them. Our formulation captures contributions to the conductance dispersion due to conductance changes during the mechanical elongation inherent to the experiments. In turn, the histogram shape is determined by the non-equilibrium stochastic features of junction rupture and formation. The microscopic parameters in the theory capture the junction’s electromechanical properties and can be isolated from separate conductance and rupture force (or junction-lifetime) measurements. The predicted behavior can be used to test the range of validity of the theory, understand the conductance histograms, design molecular junction experiments with enhanced resolution and molecular devices with more reproducible conductance properties.


Bayesian spatial modelling of localised SARS-CoV-2 transmission through mobility networks across England

Thomas Ward, Mitzi Morris, Andrew Gelman, B. Carpenter, William Ferguson, Christopher Overton, Martyn Fyles

In the early phases of growth, resurgent epidemic waves of SARS-CoV-2 incidence have been characterised by localised outbreaks. Therefore, understanding the geographic dispersion of emerging variants at the start of an outbreak is key for situational public health awareness. Using telecoms data, we derived mobility networks describing the movement patterns between local authorities in England, which we have used to inform the spatial structure of a Bayesian BYM2 model. Surge testing interventions can result in spatio-temporal sampling bias, and we account for this by extending the BYM2 model to include a random effect for each timepoint in a given area. Simulated-scenario modelling and real-world analyses of each variant that became dominant in England were conducted using our BYM2 model at local authority level in England. Simulated datasets were created using a stochastic metapopulation model, with the transmission rates between different areas parameterised using telecoms mobility data. Different scenarios were constructed to reproduce real-world spatial dispersion patterns that could prove challenging to inference, and we used these scenarios to understand the performance characteristics of the BYM2 model. The model performed better than unadjusted test positivity in all the simulation-scenarios, and in particular when sample sizes were small, or data was missing for geographical areas. Through the analyses of emerging variant transmission across England, we found a reduction in the early growth phase geographic clustering of later dominant variants as England became more interconnected from early 2022 and public health interventions were reduced. We have also shown the recent increased geographic spread and dominance of variants with similar mutations in the receptor binding domain, which may be indicative of convergent evolution of SARS-CoV-2 variants.
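A sketch of the BYM2 prior construction the model above builds on: the area effect is θ = σ(√(ρ/s)·φ + √(1−ρ)·ε), combining an ICAR spatially structured effect φ over the adjacency graph with iid noise ε, where the scaling factor s (geometric mean of the ICAR marginal variances) makes ρ interpretable as the structured share of variance. The graph here is a hypothetical 4×4 lattice standing in for the mobility-informed local-authority structure.

```python
import numpy as np

n_side = 4
n = n_side * n_side
A = np.zeros((n, n))                      # lattice adjacency matrix
for i in range(n_side):
    for j in range(n_side):
        k = i * n_side + j
        if i + 1 < n_side:
            A[k, k + n_side] = A[k + n_side, k] = 1
        if j + 1 < n_side:
            A[k, k + 1] = A[k + 1, k] = 1
Q = np.diag(A.sum(1)) - A                 # graph Laplacian = ICAR precision

evals, evecs = np.linalg.eigh(Q)
pos = evals > 1e-9                        # drop the constant (sum-to-zero) null space
Qplus = (evecs[:, pos] / evals[pos]) @ evecs[:, pos].T   # pseudoinverse covariance
s = np.exp(np.mean(np.log(np.diag(Qplus))))              # BYM2 scaling factor

rng = np.random.default_rng(3)
sigma, rho, ndraw = 1.0, 0.6, 20000
z = rng.normal(size=(int(pos.sum()), ndraw))
phi = evecs[:, pos] @ (z / np.sqrt(evals[pos])[:, None]) # ICAR draws (sum to zero)
eps = rng.normal(size=(n, ndraw))                        # iid heterogeneity
theta = sigma * (np.sqrt(rho / s) * phi + np.sqrt(1 - rho) * eps)
```

After scaling by s, the structured component has unit typical (geometric-mean) marginal variance, so ρ cleanly splits the total variance between spatial smoothing and unstructured noise; the paper further extends this with per-timepoint random effects to absorb surge-testing sampling bias.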
